
[CI/Build] Use python 3.12 in cuda image #8133

Merged: 11 commits merged into vllm-project:main on Sep 7, 2024

Conversation

@joerunde (Contributor) commented Sep 3, 2024

This PR updates the PYTHON_VERSION default arg in the CUDA Dockerfile to 3.12.

We suspect this may improve performance: IBM-built vLLM images using Python 3.11 instead of 3.10 are faster, and the Python 3.11 release was performance-focused, see: https://docs.python.org/3/whatsnew/3.11.html

Python 3.12 is the latest stable release and includes further performance improvements for asyncio:

> The asyncio package has had a number of performance improvements, with some benchmarks showing a 75% speed up.

It looks like all of the images in this repo set their Python versions independently; maybe we should update all of them to 3.12 as well, but I wanted to keep the blast radius small here.
I'm trying to build this image locally to test it out, but it may be faster to just enable CI and see what happens 😉

Might provide a very small mitigation for #8147; quick tests show a ~1% throughput improvement.
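As a rough illustration of the interpreter-level difference being bet on here, a minimal asyncio micro-benchmark along these lines (not part of this PR; iteration counts are arbitrary) can be run under python3.10 and python3.12 to compare event-loop overhead:

```python
# Sketch: run this same script under python3.10 and python3.12 and compare wall time.
import asyncio
import time

async def worker(n: int) -> int:
    await asyncio.sleep(0)  # trivial await; cost is dominated by scheduling overhead
    return n

async def main() -> None:
    start = time.perf_counter()
    for _ in range(100):
        await asyncio.gather(*(worker(i) for i in range(1_000)))
    print(f"total: {time.perf_counter() - start:.3f}s")

asyncio.run(main())
```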


PR Checklist

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain code quality and improves the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to Google Python style guide and Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented so that future contributors can easily understand it.
  • Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies the user-facing behavior of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not review the PR.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide a status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!

@joerunde marked this pull request as ready for review September 3, 2024 21:00
github-actions bot commented Sep 3, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping @simon-mo or @khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@simon-mo requested a review from khluu September 3, 2024 21:03
@khluu added the `ready` label (ONLY add when PR is ready to merge/full CI is needed) Sep 3, 2024
Signed-off-by: Joe Runde <[email protected]>
# A newer setuptools is required for installing some test dependencies from source that do not publish python 3.12 wheels
# This installation must complete before the test dependencies are collected and installed.
RUN --mount=type=cache,target=/root/.cache/pip \
python3 -m pip install "setuptools>=74.1.1"
joerunde (Contributor, Author)

The build without this failed with

[2024-09-03T22:25:50Z] #37 2.608 Collecting transformers_stream_generator (from -r requirements-test.txt (line 24))
[2024-09-03T22:25:50Z] #37 2.618   Downloading transformers-stream-generator-0.0.5.tar.gz (13 kB)
[2024-09-03T22:25:50Z] #37 2.627   Preparing metadata (setup.py): started
[2024-09-03T22:25:50Z] #37 2.646   Preparing metadata (setup.py): finished with status 'error'
[2024-09-03T22:25:50Z] #37 2.648   error: subprocess-exited-with-error
[2024-09-03T22:25:50Z] #37 2.648   
[2024-09-03T22:25:50Z] #37 2.648   × python setup.py egg_info did not run successfully.
[2024-09-03T22:25:50Z] #37 2.648   │ exit code: 1
[2024-09-03T22:25:50Z] #37 2.648   ╰─> [1 lines of output]
[2024-09-03T22:25:50Z] #37 2.648       ERROR: Can not execute `setup.py` since setuptools is not available in the build environment.
[2024-09-03T22:25:50Z] #37 2.648       [end of output]
[2024-09-03T22:25:50Z] #37 2.648   
[2024-09-03T22:25:50Z] #37 2.648   note: This error originates from a subprocess, and is likely not a problem with pip.
[2024-09-03T22:25:50Z] #37 2.657 error: metadata-generation-failed

I verified this was because the vllm-base target never updates setuptools, so it still has version 45.2.0 installed, which causes this error with a very misleading message.
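For reference, a quick way to confirm which setuptools a given image stage actually ships (a hypothetical one-liner, not part of the PR) is to run something like this inside the container:

```python
# e.g. `docker run --rm <image> python3 -c "..."` with the snippet below;
# the unpatched vllm-base stage reports 45.2.0
from importlib.metadata import version
print("setuptools", version("setuptools"))
```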

Collaborator

I wonder why setuptools is at 45.2.0 specifically, and why being on an older version leads to this error message? The message doesn't mention anything about the version being outdated/incompatible, only that setuptools is not in the build env.

joerunde (Contributor, Author)

Yeah, it took some googling to find that others ran into the same error when setuptools is simply out of date. I don't know why that's the case, but I'd guess it's some API incompatibility that isn't handled correctly.

You can reproduce it by installing setuptools==45.2.0 and then trying to install transformers-stream-generator in a fresh Python 3.12 venv (see the sketch below).

> I wonder why setuptools is at 45.2.0 specifically

Yeah, I'm not sure; from the build logs it looks like that's just what gets installed when we do `apt-get install -y python3.12 python3.12-dev python3.12-venv`. I see this in the log:

[2024-09-05T21:34:56Z] #10 11.07 Get:191 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 python3-setuptools all 45.2.0-1ubuntu0.1 [330 kB]
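A rough sketch of that reproduction as a script (assumes it runs under a Python 3.12 interpreter on Linux; the failure is the misleading "setuptools is not available" error quoted above):

```python
# Sketch: reproduce the misleading "setuptools is not available" failure.
import os
import subprocess
import tempfile
import venv

env_dir = tempfile.mkdtemp()
venv.create(env_dir, with_pip=True)          # fresh Python 3.12 venv
pip = os.path.join(env_dir, "bin", "pip")    # Linux/macOS layout

# pin the old setuptools that Ubuntu 20.04's python3-setuptools package provides
subprocess.run([pip, "install", "setuptools==45.2.0"], check=True)

# installing this sdist-only package now fails during metadata generation
result = subprocess.run([pip, "install", "transformers-stream-generator"])
print("pip exit code:", result.returncode)   # non-zero with the outdated setuptools
```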

@khluu removed the `ready` label (ONLY add when PR is ready to merge/full CI is needed) Sep 3, 2024
@youkaichao (Member)

Is 3.12 too aggressive? Some packages might not publish 3.12 wheels.

I think 3.11 should be a safe upgrade.

@joerunde (Contributor, Author) commented Sep 4, 2024

@youkaichao solid maybe

I'm looking through all the failures right now to see if anything is really bad, but in general I'd expect that since 3.12 has been stable for a year and we're less than a month away from the 3.13 release, packages should be out for 3.12 by now for anything that's worth depending on. If we have dependencies that are not 3.12 compatible, it'd be better to question why we're including them if they're not regularly maintained.

Edit: so far it looks like it's just the six package that's problematic, but only because we're not installing the latest version in the image. But that could be hiding other errors 🤞

@joerunde (Contributor, Author) commented Sep 4, 2024

Alright, fastcheck tests are passing and a quick check with a llama 7b model on an A100 at least shows that it's not slower:

| Python | QPS | Median TTFT | Median TPOT | Throughput (tok/s) |
|--------|-----|-------------|-------------|--------------------|
| 3.12   | 4   | 46.96       | 13.76       | 622.67             |
| 3.12   | 16  | 62.54       | 27.79       | 1436.32            |
| 3.10   | 4   | 47.07       | 14.11       | 622.29             |
| 3.10   | 16  | 67.26       | 29.24       | 1404.17            |

At 4 QPS the results are about the same, but it looks a little better at 16 QPS. I think this tracks: higher loads should run into more CPU contention.

I could beg for some more GPUs to run some TP tests with larger models if anybody wants more data before merging.

@simon-mo @khluu think we can at least run the full CI suite here?

@khluu added the `ready` label (ONLY add when PR is ready to merge/full CI is needed) Sep 4, 2024

@khluu (Collaborator) commented Sep 4, 2024

Marked ready so we can run the full suite.

@@ -177,7 +177,7 @@ def test_custom_logging_config_is_parsed_and_used_when_provided():
     logging_config_file.name), patch(
         "logging.config.dictConfig") as dict_config_mock:
     _configure_vllm_root_logger()
-    assert dict_config_mock.called_with(valid_logging_config)
+    dict_config_mock.assert_called_with(valid_logging_config)
joerunde (Contributor, Author)

This is failing and I'm not sure what the expected behavior of this test was before; looking into it a bit to try to understand it.

joerunde (Contributor, Author)

So I think the mock on logging.config.dictConfig was incorrect: vllm.logger imports dictConfig eagerly, so we instead have to mock the local vllm.logger.dictConfig.

The test was passing before because dict_config_mock.called_with was returning a mock, which then passed the assert since it's not falsy.
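A small standalone illustration of both points (not vLLM code; the patch-target comment mirrors the situation described above):

```python
from unittest.mock import MagicMock

m = MagicMock()

# `called_with` is not an assertion helper: attribute access on a Mock returns a new
# child Mock, and Mocks are truthy, so this assert can never fail even though `m` was
# never called at all.
assert m.called_with("anything")

# The real helper raises, because `m` was never called.
try:
    m.assert_called_with("anything")
except AssertionError as exc:
    print("assert_called_with correctly failed:", exc)

# Patch-target note: if a module does `from logging.config import dictConfig` at import
# time (as vllm.logger does), it holds its own reference, so
# patch("logging.config.dictConfig") never intercepts calls made through that name;
# patch("vllm.logger.dictConfig") does.
```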

@joerunde (Contributor, Author) commented Sep 6, 2024

I got my hands on 2 A100s and ran a quick comparison with llama2-70b, tp=2

| Python | QPS | Median TTFT | Median TPOT | Throughput (tok/s) |
|--------|-----|-------------|-------------|--------------------|
| 3.12   | 4   | 273.99      | 98.26       | 959.07             |
| 3.12   | 16  | 3157.57     | 164.87      | 1267.78            |
| 3.10   | 4   | 308.58      | 102.64      | 953.74             |
| 3.10   | 16  | 3211.16     | 168.52      | 1249.13            |

Looks faster 🚀🚀🚀

@mgoin (Collaborator) left a comment

Looks good to me - maybe we should update the minimum version of setuptools in the common requirements

@joerunde (Contributor, Author) commented Sep 6, 2024

@mgoin we could, I just wasn't sure if we wanted to include setuptools as a dependency. It's in requirements-build.txt atm, so it does get updated for the dockerfile targets that build vllm, but not in the ones that consume it. It only seems to be a problem for installing a dependency that's currently listed as only a test dependency.

What I want to say is that presumably the 3.12 wheels are already working and I don't want to change them. But since I did have to explicitly add six to the common requirements, maybe nobody is looking very closely at the 3.12 wheels 🤔

I don't really feel strongly either way; mostly I'd rather not have to wait for another build, lol. Let me know if you want that change in and I'll add it.

@khluu merged commit cfe712b into vllm-project:main Sep 7, 2024
69 checks passed
opus24 added a commit to Hyper-Accel/vllm that referenced this pull request Sep 10, 2024
@Patrick10203

When building the Dockerfile from the main branch and starting the resulting image, I get this error:
Traceback (most recent call last):
  File "<frozen runpy>", line 189, in _run_module_as_main
  File "<frozen runpy>", line 112, in _get_module_details
  File "/usr/local/lib/python3.12/dist-packages/vllm/__init__.py", line 3, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 11, in <module>
    from vllm.config import (CacheConfig, ConfigFormat, DecodingConfig,
  File "/usr/local/lib/python3.12/dist-packages/vllm/config.py", line 12, in <module>
    from vllm.model_executor.layers.quantization import QUANTIZATION_METHODS
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/__init__.py", line 3, in <module>
    from vllm.model_executor.sampling_metadata import (SamplingMetadata,
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/sampling_metadata.py", line 11, in <module>
    from vllm.triton_utils.sample import get_num_triton_sampler_splits
  File "/usr/local/lib/python3.12/dist-packages/vllm/triton_utils/__init__.py", line 7, in <module>
    from vllm.triton_utils.custom_cache_manager import (
  File "/usr/local/lib/python3.12/dist-packages/vllm/triton_utils/custom_cache_manager.py", line 3, in <module>
    from triton.runtime.cache import (FileCacheManager, default_cache_dir,
  File "/usr/local/lib/python3.12/dist-packages/triton/__init__.py", line 8, in <module>
    from .runtime import (
  File "/usr/local/lib/python3.12/dist-packages/triton/runtime/__init__.py", line 1, in <module>
    from .autotuner import (Autotuner, Config, Heuristics, autotune, heuristics)
  File "/usr/local/lib/python3.12/dist-packages/triton/runtime/autotuner.py", line 9, in <module>
    from ..testing import do_bench, do_bench_cudagraph
  File "/usr/local/lib/python3.12/dist-packages/triton/testing.py", line 7, in <module>
    from . import language as tl
  File "/usr/local/lib/python3.12/dist-packages/triton/language/__init__.py", line 4, in <module>
    from . import math
  File "/usr/local/lib/python3.12/dist-packages/triton/language/math.py", line 1, in <module>
    from . import core
  File "/usr/local/lib/python3.12/dist-packages/triton/language/core.py", line 10, in <module>
    from ..runtime.jit import jit
  File "/usr/local/lib/python3.12/dist-packages/triton/runtime/jit.py", line 12, in <module>
    from ..runtime.driver import driver
  File "/usr/local/lib/python3.12/dist-packages/triton/runtime/driver.py", line 1, in <module>
    from ..backends import backends
  File "/usr/local/lib/python3.12/dist-packages/triton/backends/__init__.py", line 50, in <module>
    backends = _discover_backends()
               ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/triton/backends/__init__.py", line 44, in _discover_backends
    driver = _load_module(name, os.path.join(root, name, 'driver.py'))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/triton/backends/__init__.py", line 12, in _load_module
    spec.loader.exec_module(module)
  File "/usr/local/lib/python3.12/dist-packages/triton/backends/nvidia/driver.py", line 7, in <module>
    from triton.runtime.build import _build
  File "/usr/local/lib/python3.12/dist-packages/triton/runtime/build.py", line 8, in <module>
    import setuptools
  File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 5, in <module>
    import distutils.core
ModuleNotFoundError: No module named 'distutils'

Is this because the package was removed in Python 3.12? See here: https://docs.python.org/3.10/library/distutils.html
I see that setuptools is now installed, which should solve this issue, but somehow it doesn't for me...
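For what it's worth, the distutils connection can be checked directly: the stdlib distutils is gone in Python 3.12, and a sufficiently new setuptools vendors a replacement, which is why upgrading setuptools makes the import succeed. A small check along these lines (assumes a 3.12 interpreter):

```python
import sys
print(sys.version_info[:2])     # (3, 12): no stdlib distutils anymore

import setuptools               # raises ModuleNotFoundError on setuptools 45.2.0, since
                                # that version itself imports the stdlib distutils
import distutils.core           # with a recent setuptools this resolves to its vendored copy
print(distutils.core.__file__)  # .../setuptools/_distutils/core.py
```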

@youkaichao (Member)

cc @joerunde

@Patrick10203 commented Sep 11, 2024

I have some more info on this problem. When I launched the Docker image with bash and ran pip list, I could see that setuptools was at version 45.2.0. When I run "pip install setuptools==74.1.2" and then start the vLLM server, it works. When I only do pip install setuptools, it doesn't work. I don't understand how the newer version of setuptools didn't get installed when building the Dockerfile.

I also double-checked the Dockerfile I pulled from git, and it contains this line:
RUN --mount=type=cache,target=/root/.cache/pip \
    python3 -m pip install "setuptools>=74.1.1"

dtrifiro pushed a commit to opendatahub-io/vllm that referenced this pull request Sep 12, 2024
@thies1006

Apparently I have exactly the same issue.

@joerunde (Contributor, Author)

Ah shoot, sorry all. @mgoin was right, I should've just added that to the common requirements :(

> I also double-checked the Dockerfile I pulled from git, and it contains this line:
> RUN --mount=type=cache,target=/root/.cache/pip \
>     python3 -m pip install "setuptools>=74.1.1"

That's only in the test stage, so it doesn't make it to the final image. I added it there because not having it caused an error installing test dependencies.

Is this a regression for the latest released images, or only for images built manually off main?
I'll start on a fix.

Jeffwan pushed a commit to aibrix/vllm that referenced this pull request Sep 19, 2024
siddharth9820 pushed a commit to axonn-ai/vllm that referenced this pull request Sep 30, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
garg-amit pushed a commit to garg-amit/vllm that referenced this pull request Oct 28, 2024
Labels: ready (ONLY add when PR is ready to merge/full CI is needed)

6 participants